    Probing the Landscape: Toward a Systematic Taxonomy of Online Peer Assessment Systems in Education

    We present a research framework for a taxonomy of online educational peer-assessment systems. The framework enables researchers in technology-supported peer assessment to understand the current landscape of technologies supporting student peer review and assessment, and specifically their affordances and constraints. It helps identify the major themes in existing and potential research and formulate an agenda for future studies. It also informs educators and system-design practitioners about use cases and design options.

    Toward Better Training in Peer Assessment: Does Calibration Help?

    For peer assessments to be helpful, student reviewers need to submit reviews of good quality. This requires some training or guidance from teaching staff, lest reviewers read each other's work uncritically, assigning good scores but offering few suggestions. One approach to improving review quality is calibration: comparing a student's individual review to a standard, usually a review done by teaching staff on the same artifact. In this paper, we categorize two modes of calibration for peer assessment and discuss our experience with both of them in a pilot study with the Expertiza system.
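
    To make the score-based mode concrete, here is a minimal sketch in Python, assuming rubric criteria scored on a numeric scale and mean absolute deviation as the agreement measure; the names are illustrative and not part of the Expertiza system.

        # Minimal sketch of score-based calibration: compare a student's rubric
        # scores on an artifact to the instructor's standard review of the same
        # artifact. Names are illustrative, not the Expertiza API.
        def calibration_error(student_scores, standard_scores):
            """Mean absolute deviation between a student's rubric scores and
            the instructor's standard scores, criterion by criterion."""
            assert len(student_scores) == len(standard_scores)
            deviations = [abs(s - t) for s, t in zip(student_scores, standard_scores)]
            return sum(deviations) / len(deviations)

        # Example: a 5-criterion rubric scored 1-5. Lower error suggests the
        # reviewer is better calibrated to the instructor's judgment.
        student = [4, 3, 5, 2, 4]
        instructor = [5, 3, 4, 2, 3]
        print(calibration_error(student, instructor))  # 0.6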

    Assessing the Quality of Automatic Summarization for Peer Review in Education

    Technology-supported peer review has drawn much interest from educators and researchers. It encourages active learning and provides students with timely feedback and multiple perspectives on their work. Current online peer-review systems allow a student's work to be reviewed by a handful of peers. While multiple reviews give a high degree of confidence in the feedback, reading a large amount of it can be overwhelming; our observations show that students ignore some feedback when there is too much of it. In this work, we automatically summarize the feedback by extracting similar content mentioned by multiple reviewers, capturing the strengths and weaknesses of the work. We evaluate different automatic summarization algorithms and summary lengths on a human-rated educational peer-review dataset. In general, students found that medium-sized generated summaries (5-10 sentences) encapsulate the context of the reviews, convey their intent, and help them judge the quality of the work.
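
    As an illustration of the general idea, here is a minimal extractive-summarization sketch in Python, assuming bag-of-words cosine similarity and centrality-based sentence ranking; the paper evaluates several algorithms, and this sketch is not necessarily any of them.

        # Minimal sketch: rank review sentences by average similarity to all
        # other sentences, so points echoed by many reviewers rank highest,
        # then keep the top few (in original order) as the summary.
        import math
        import re
        from collections import Counter

        def tokenize(sentence):
            return re.findall(r"[a-z']+", sentence.lower())

        def cosine(a, b):
            dot = sum(a[w] * b[w] for w in set(a) & set(b))
            norm = math.sqrt(sum(v * v for v in a.values())) * \
                   math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def summarize(sentences, max_sentences=7):
            vectors = [Counter(tokenize(s)) for s in sentences]
            scores = [sum(cosine(v, w) for j, w in enumerate(vectors) if j != i)
                      / max(len(vectors) - 1, 1)
                      for i, v in enumerate(vectors)]
            ranked = sorted(range(len(sentences)), key=scores.__getitem__, reverse=True)
            return [sentences[i] for i in sorted(ranked[:max_sentences])]

        reviews = ["The code is well documented.",
                   "Documentation is thorough and clear.",
                   "Tests are missing for edge cases.",
                   "I liked the clear documentation."]
        print(summarize(reviews, max_sentences=2))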

    Work in Progress - Reuse of Homework and Test Questions: When, Why, and How to Maintain Security?

    It is always difficult to obtain good homework problems and test questions. Instructors can save time, and polish their work, by reusing questions from previous offerings. And they would do so semester after semester, except for one obvious risk: that students would simply copy or memorize the answers rather than learning the material. This paper presents the results of a survey of hundreds of postsecondary educators. How frequently do they reuse questions, and how do they prevent students from getting advance access to the answers? How much trouble have they had with "files" kept by fraternities, sororities, and ethnic groups? Do they consider it cheating to copy or memorize answers? Has the increasing use of electronic resources made it easier or harder to maintain security? Do they typically alter their policies when they begin to put material online? The answers to these questions can guide all of us toward more realistic and secure reuse policies.

    Electronic peer review and peer grading in computer-science courses

    We have implemented a peer-grading system for review of student assignments over the World-Wide Web and used it in approximately eight computer-science courses. Students prepare their assignments and submit them to our Peer Grader (PG) system. Other students are then assigned to review and grade the assignments. The system allows authors and reviewers to communicate, and authors can update their submissions. Unique features of our approach include the ability to submit arbitrary sets of Web pages for review, and mechanisms for encouraging careful review of submissions. We have used the system to produce high-quality compilations of student work. Our assignment cycle consists of six phases, from signing up for an assignment to Web publishing of the final result. Based upon our experience with PG, we offer suggestions for improving the system to make it more easily usable by students at all levels.

    1 Peer Review in the Classroom
    Peer review is a concept that has served the academic community well for several generations, so it is not surprising that it has found its way into the classroom. Dozens of studies report on different aspects of peer review, peer assessment, and peer grading in an academic setting; a comprehensive survey can be found in Topp 98. Experiments with peer assessment of writing go back more than 25 years [4]. Peer review has been used in a wide variety of disciplines, among them accounting [8], engineering [7, 10], mathematics [3], and mathematics education [6]. However, electronic peer-review experiments have been much rarer. Although the Daedalus Integrated Writing Environment [1] is widely used for peer assessment of student writing, only a few computer-mediated peer-review experiments have taken place in other fields. An early project in computer-science and nursing education wa
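
    The abstract does not describe how PG maps reviewers to submissions, so purely as an assumption-labeled sketch, here is one common scheme (circular shifting) in Python that gives each submission k reviewers and prevents self-review.

        # Minimal sketch of circular reviewer assignment: shift around the
        # roster so each submission gets k distinct reviewers and no student
        # ever reviews their own work. Not PG's actual algorithm.
        def assign_reviewers(authors, k=3):
            n = len(authors)
            if k >= n:
                raise ValueError("need more participants than reviews per submission")
            return {author: [authors[(i + shift) % n] for shift in range(1, k + 1)]
                    for i, author in enumerate(authors)}

        print(assign_reviewers(["ana", "ben", "chen", "dia", "eli"], k=2))
        # e.g. {'ana': ['ben', 'chen'], 'ben': ['chen', 'dia'], ...}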

    Comparing Approaches to Wiki Assessment

    Many instructors now assign homework to be done on wikis. Wikis are a good collaborative space, but assessment is a problem: students tend to write more on a wiki than on other shared documents, they can modify pages created by other students, and the content can be continually revised. Several approaches to assessment will be described, including instructor assessment, peer assessment, and self-assessment. The effectiveness of these strategies will be compared using data from 10 courses at our institution that used wikis in 2007–08. Questions such as the following will be answered: How much do students feel that collaboration is enhanced by writing on a wiki? What kinds of assignments and feedback best enhance critical thinking? What factors influence students' perceptions of the feedback they receive? This session should provide ideas on how to use wikis effectively in your classes.